    Allocation of Virtual Machines in Cloud Data Centers - A Survey of Problem Models and Optimization Algorithms

    Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines (VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical resources incurs significant monetary costs and also has an environmental impact. Therefore, cloud providers must optimize the usage of physical resources through careful allocation of VMs to hosts, continuously balancing the conflicting requirements of performance and operational cost. In recent years, several algorithms have been proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable because of subtle differences in the underlying problem models. This paper surveys the problem formulations and optimization algorithms in use, highlighting their strengths and limitations, and pointing out areas that need further research.
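
    To make the optimization problem concrete: many of the algorithms covered by such surveys build on bin-packing-style placement heuristics. The following is a minimal, hypothetical sketch of one common baseline, first-fit decreasing, assuming a single resource dimension and uniform host capacity; it is illustrative only and not an algorithm taken from the paper.

        def first_fit_decreasing(vm_loads, host_capacity):
            # Greedy bin-packing baseline: place each VM, largest first,
            # on the first host with enough remaining capacity; open a
            # new host when none fits. Assumes every load fits on an
            # empty host (load <= host_capacity).
            free = []       # remaining capacity of each opened host
            placement = {}  # VM index -> host index
            for vm, load in sorted(enumerate(vm_loads), key=lambda x: -x[1]):
                for h, cap in enumerate(free):
                    if load <= cap:
                        free[h] -= load
                        placement[vm] = h
                        break
                else:
                    free.append(host_capacity - load)
                    placement[vm] = len(free) - 1
            return placement

    Realistic models add further resource dimensions (memory, bandwidth), migration costs, and energy terms, which is precisely where the surveyed problem formulations diverge.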

    Typical-case complexity and the SAT competitions

    The aim of this paper is to gather insight into the typical-case complexity of the Boolean Satisfiability (SAT) problem by mining data from the SAT competitions. Specifically, the statistical properties of the SAT benchmarks and their impact on complexity are investigated, as well as connections between different metrics of complexity. While some of the investigated properties and relationships are “folklore” in the SAT community, this study aims to show scientifically which parts of the folklore are true and which are not.

    Modeling the virtual machine allocation problem

    Finding the right allocation of virtual machines (VMs) in cloud data centers is one of the key optimization problems in cloud computing. Accordingly, many algorithms have been proposed for the problem. However, lacking a single, generally accepted formulation of the VM allocation problem, there are many subtle differences in the problem formulations that these algorithms address; moreover, in several cases, the exact problem formulation is not even defined explicitly. Hence, in this paper, we present a comprehensive generic model of the VM allocation problem. We also show how the often-investigated problem variants fit into this general model.
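
    As a rough illustration of what any such generic model has to capture at a minimum, here is a hypothetical sketch of the core capacity constraint. The type and field names are invented for illustration; the paper's generic model is considerably richer (covering, for example, migration and multiple objectives).

        from dataclasses import dataclass

        @dataclass
        class VM:
            cpu: float   # requested CPU capacity
            ram: float   # requested memory

        @dataclass
        class Host:
            cpu: float   # total CPU capacity
            ram: float   # total memory

        def is_feasible(allocation, vms, hosts):
            # An allocation (VM index -> host index) is feasible iff,
            # on every host, the aggregate demand of its VMs fits
            # within the host's capacity in every dimension.
            used = {h: [0.0, 0.0] for h in range(len(hosts))}
            for v, h in allocation.items():
                used[h][0] += vms[v].cpu
                used[h][1] += vms[v].ram
            return all(used[h][0] <= hosts[h].cpu and
                       used[h][1] <= hosts[h].ram for h in used)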

    GPGPU: Hardware/Software Co-Design for the Masses

    With the recent development of high-performance graphics processing units (GPUs) capable of performing general-purpose computation (GPGPU: general-purpose computation on the GPU), a new platform is emerging. It consists of a central processing unit (CPU), which is very fast in sequential execution, and a GPU, which exhibits a high degree of parallelism and thus very high performance on certain types of computations. Optimally leveraging the advantages of this platform is challenging in practice. We spotlight the analogy between GPGPU and hardware/software co-design (HSCD), a more mature design paradigm, to derive a design process for GPGPU. This process, with appropriate tool support and automation, will ease GPGPU design significantly. Identifying the challenges associated with establishing this process can serve as a roadmap for the future development of the GPGPU field.

    Accelerating SAT solving with best-first-search

    Solvers for Boolean satisfiability (SAT), like other algorithms for NP-complete problems, tend to have a heavy-tailed runtime distribution. Successful SAT solvers make use of frequent restarts to mitigate this problem by abandoning unfruitful parts of the search space after some time. Although frequent restarting works fairly well, it is a rather simplistic technique that does nothing explicit to make the next try better than the previous one. In this paper, we suggest a more sophisticated method: using a best-first-search approach to move quickly between different parts of the search space. This way, the search can always focus on the most promising region. We investigate empirically how the performance of frequent restarts, best-first-search, and a combination of the two compare to each other. Our findings indicate that the combined method works best, improving on the performance of frequent restarts by 36-43% on the benchmark set used.
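
    For reference, the frequent-restart baseline in many SAT solvers is driven by the Luby sequence; the sketch below implements that well-known schedule for illustration (the abstract does not specify which restart policy the compared baseline uses, so this is an assumption about a typical setup).

        def luby(i):
            # i-th element (1-indexed) of the Luby restart sequence:
            # 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ...
            # A solver restarts after luby(i) * base_interval conflicts.
            k = 1
            while (1 << k) - 1 < i:
                k += 1
            if (1 << k) - 1 == i:
                return 1 << (k - 1)
            return luby(i - (1 << (k - 1)) + 1)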

    Determining the expected runtime of an exact graph coloring algorithm

    Exact algorithms for graph coloring tend to have high variance in their runtime, posing a significant obstacle to their practical application. The problem could be mitigated by appropriate prediction of the runtime. For this purpose, we devise an algorithm to efficiently compute the expected runtime of an exact graph coloring algorithm as a function of the parameters of the problem instance: the graph's size, edge density, and the number of available colors. Specifically, we investigate the complexity of a typical backtracking algorithm for coloring random graphs with k colors. Using the expected size of the search tree as the measure of complexity, we devise a polynomial-time algorithm for predicting algorithm complexity depending on the parameters of the problem instance. Our method also delivers the expected number of solutions (i.e., number of valid colorings) of the given problem instance, which can help us decide whether the given problem instance is likely to be feasible or not. Based on our algorithm, we also show in accordance with previous results that increasing the number of vertices of the graph does not increase the complexity beyond some complexity limit. However, this complexity limit grows rapidly when the number of colors increases.
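
    The following sketch shows one way such a prediction can be computed in polynomial time, following the classic Wilf-style first-moment analysis: the expected number of backtrack-tree nodes equals the expected number of proper partial colorings, which a dynamic program over color-class sizes can evaluate. This is an illustrative reconstruction under that assumption, not necessarily the paper's exact algorithm.

        from math import comb

        def expected_tree_size(n, p, k):
            # g[j][s] = expected number of proper colorings of s vertices
            # of G(n, p) that use only the first j colors.
            g = [[0.0] * (n + 1) for _ in range(k + 1)]
            g[0][0] = 1.0
            for j in range(1, k + 1):
                for s in range(n + 1):
                    # choose which t of the s vertices receive color j;
                    # all C(t, 2) pairs inside that class must be non-edges,
                    # each absent independently with probability 1 - p
                    g[j][s] = sum(comb(s, t)
                                  * (1 - p) ** (t * (t - 1) // 2)
                                  * g[j - 1][s - t]
                                  for t in range(s + 1))
            # one search-tree node per proper coloring of the first
            # i vertices, for every prefix length i
            return sum(g[k][i] for i in range(n + 1))

    Under the same assumptions, g[k][n] is the expected number of complete valid colorings, which corresponds to the feasibility indicator mentioned in the abstract.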

    Average-case complexity of backtrack search for coloring sparse random graphs

    We investigate asymptotically the expected number of steps taken by backtrack search for k-coloring random graphs G_{n,p(n)}, or proving non-k-colorability, where p(n) is an arbitrary sequence tending to 0, and k is constant. Contrary to the case of constant p, where the expected runtime is known to be O(1), we prove that here the expected runtime tends to infinity. We establish how the asymptotic behaviour of the expected number of steps depends on the sequence p(n). In particular, for p(n) = d/n, where d is a constant, the runtime is always exponential, but it can also be polynomial if p(n) decreases sufficiently slowly, e.g. for p(n) = 1/ln n.
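
    For context, analyses of this kind typically start from a first-moment expression for the expected number of backtrack-tree nodes, one node per proper partial coloring. A standard form of that expression, stated here as background rather than as the paper's exact formulation, is:

        \mathbb{E}[T_n] \;=\; \sum_{i=0}^{n}
          \sum_{\substack{n_1 + \cdots + n_k = i \\ n_j \ge 0}}
          \binom{i}{n_1, \ldots, n_k}
          \bigl(1 - p(n)\bigr)^{\sum_{j=1}^{k} \binom{n_j}{2}}

    Here n_j is the number of vertices with color j among the first i vertices; a partial coloring is proper exactly when none of its monochromatic pairs is an edge, each edge being absent independently with probability 1 - p(n).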

    Accelerating backtrack search with a best-first-search strategy

    Backtrack-style exhaustive search algorithms for NP-hard problems tend to have large variance in their runtime. This is because “fortunate” branching decisions can lead to finding a solution quickly, whereas “unfortunate” decisions in another run can lead the algorithm to a region of the search space with no solutions. In the literature, frequent restarting has been suggested as a means to overcome this problem. In this paper, we propose a more sophisticated approach: a best-first-search heuristic to quickly move between parts of the search space, always concentrating on the most promising region. We describe how this idea can be efficiently incorporated into a backtrack search algorithm, without sacrificing optimality. Moreover, we demonstrate empirically that, for hard solvable problem instances, the new approach provides significantly higher speed-up than frequent restarting.
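
    In outline, the idea can be realized by keeping the open (not yet expanded) nodes of the backtrack tree in a priority queue ordered by a heuristic promise score. Below is a minimal, hypothetical skeleton with caller-supplied expand, score, and is_solution functions; the paper's actual integration into backtracking, including its memory management, is more involved.

        import heapq

        def best_first_search(root, expand, score, is_solution):
            # Always expand the open node with the best (lowest) score,
            # so the search keeps moving to the most promising region
            # instead of deepening an unfruitful branch.
            counter = 0  # tie-breaker so equal-score nodes never compare
            frontier = [(score(root), counter, root)]
            while frontier:
                _, _, node = heapq.heappop(frontier)
                if is_solution(node):
                    return node
                for child in expand(node):  # feasible branching decisions
                    counter += 1
                    heapq.heappush(frontier, (score(child), counter, child))
            return None  # search space exhausted: provably no solution

    Because the queue is drained completely before reporting failure, the whole search space is still explored when necessary, which is how optimality can be preserved despite the non-chronological exploration order.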